
    Uses of the pitch-scaled harmonic filter in speech processing

    The pitch-scaled harmonic filter (PSHF) is a technique for decomposing speech signals into their periodic and aperiodic constituents during periods of phonation. In this paper, the use of the PSHF for speech analysis and processing tasks is described. The periodic component can be used as an estimate of the part attributable to voicing, and the aperiodic component can act as an estimate of that attributable to turbulence noise, i.e., from fricative, aspiration and plosive sources. Here we present the algorithm for separating the periodic and aperiodic components from the pitch-scaled Fourier transform of a short section of speech, and show how to derive signals suitable for time-series analysis and for spectral analysis. These components can then be processed in a manner appropriate to their source type, for instance, extracting zeros as well as poles from the aperiodic spectral envelope. A summary of tests on synthetic speech-like signals demonstrates the robustness of the PSHF's performance to perturbations from additive noise, jitter and shimmer. Examples are given of speech analysed in various ways: power spectrum, short-time power and short-time harmonics-to-noise ratio, linear prediction and mel-frequency cepstral coefficients. Besides being valuable for speech production and perception studies, the latter two analyses show potential for incorporation into speech coding and speech recognition systems. Further uses of the PSHF are revealing normally obscured acoustic features, exploring interactions of turbulence-noise sources with voicing, and pre-processing speech to enhance subsequent operations.
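
    The core idea can be sketched compactly: if the analysis window spans exactly b pitch periods, the voiced harmonics fall on DFT bins that are integer multiples of b, and the remaining bins can be assigned to the noise estimate. The following Python sketch illustrates this under simplifying assumptions (rectangular window, known and fixed f0, hypothetical function name); the published PSHF is considerably more refined.

```python
import numpy as np

def pshf_frame(x, f0, fs, b=4):
    """Hypothetical one-frame sketch of pitch-scaled harmonic filtering.

    With a rectangular window of exactly b pitch periods, harmonics of
    f0 fall on DFT bins k = b, 2b, 3b, ...  Those bins give the periodic
    (voiced) estimate; the residual is the aperiodic (noise) estimate.
    """
    n = int(round(b * fs / f0))        # window length: b pitch periods
    frame = np.asarray(x[:n], dtype=float)
    spec = np.fft.rfft(frame)
    harm = np.zeros_like(spec)
    harm[b::b] = spec[b::b]            # keep only the harmonic bins
    periodic = np.fft.irfft(harm, n)
    aperiodic = frame - periodic       # turbulence-noise estimate
    return periodic, aperiodic
```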

    Pitch-scaled estimation of simultaneous voiced and turbulence-noise components in speech


    Magnetic Flux of EUV Arcade and Dimming Regions as a Relevant Parameter for Early Diagnostics of Solar Eruptions - Sources of Non-Recurrent Geomagnetic Storms and Forbush Decreases

    This study aims at the early diagnostics of the geoeffectiveness of coronal mass ejections (CMEs) from quantitative parameters of the accompanying EUV dimming and arcade events. We study events of the 23rd solar cycle in which major non-recurrent geomagnetic storms (GMSs) with Dst < -100 nT are sufficiently reliably identified with their solar sources in the central part of the disk. Using the SOHO/EIT 195 Å images and MDI magnetograms, we select significant dimming and arcade areas and calculate the summed unsigned magnetic fluxes in these regions at the photospheric level. The high relevance of this eruption parameter is demonstrated by its pronounced correlation with the Forbush decrease (FD) magnitude, which, unlike GMSs, does not depend on the sign of the Bz component but is determined by global characteristics of ICMEs. Correlations with the same magnetic flux in the solar source region are found for the GMS intensity (at this first step, without taking into account the factors determining the Bz component near the Earth), as well as for the time intervals between the solar eruptions and the GMS onset and peak times. The larger the magnetic flux, the stronger the FD and GMS intensities are and the shorter the ICME transit time is. The revealed correlations indicate that the main quantitative characteristics of major non-recurrent space-weather disturbances are largely determined by measurable parameters of solar eruptions, in particular by the magnetic flux in dimming areas and arcades, and can be tentatively estimated in advance with a lead time of 1 to 4 days. For the GMS intensity, the revealed dependencies allow one to estimate the value that can be expected if the Bz component is negative.
    Comment: 27 pages, 5 figures. Accepted for publication in Solar Physics.
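
    The key quantity here, the summed unsigned flux over a selected region, is simple to state: it is the sum of |B_los| times pixel area over the dimming or arcade pixels at the photospheric level. A minimal sketch with hypothetical names, omitting the paper's actual area-selection criteria:

```python
import numpy as np

def unsigned_flux(b_los_gauss, mask, pixel_area_cm2):
    """Summed unsigned magnetic flux (in Mx) over masked pixels.

    b_los_gauss: line-of-sight magnetogram in Gauss (e.g. from MDI);
    mask: boolean array marking the dimming/arcade region;
    pixel_area_cm2: area of one pixel at the photosphere in cm^2.
    """
    return np.sum(np.abs(b_los_gauss[mask])) * pixel_area_cm2
```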

    Many-body aspects of positron annihilation in the electron gas

    We investigate positron annihilation in the electron liquid as a case study for many-body theory, in particular the optimized Fermi hypernetted-chain (FHNC-EL) method. We examine several approximation schemes and show that one has to go up to the most sophisticated implementation of the theory currently available in order to get annihilation rates that agree reasonably well with experimental data. Even though there is basically just one number to look at, the electron-positron pair distribution function at zero distance, it is exactly this number that dictates how the full pair distribution behaves: in most cases, it falls off monotonically towards unity as the distance increases. Cases where the electron-positron pair distribution exhibits a dip are precursors to the formation of bound electron-positron pairs. The formation of electron-positron pairs is indicated by a divergence of the FHNC-EL equations; from this we can estimate the density regime where positrons must be localized. In our calculations this occurs in the range 9.4 <= r_s <= 10, where r_s is the dimensionless density parameter of the electron liquid.
    Comment: To appear in Phys. Rev. B (2003).
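
    For context on why a single number carries so much weight: by the standard textbook relation (not quoted in the abstract), the spin-averaged annihilation rate in an electron gas is $\lambda = \pi r_0^2\, c\, n\, g_{ep}(0)$, where $r_0$ is the classical electron radius, $c$ the speed of light, $n$ the electron density and $g_{ep}(0)$ the electron-positron pair distribution function at zero separation, so any error in $g_{ep}(0)$ propagates linearly into the predicted rate.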

    Nano X Image Guidance in radiation therapy: feasibility study protocol for cone beam computed tomography imaging with gravity-induced motion.

    Background: This paper describes the protocol for the Nano X Image Guidance (Nano X IG) trial, a single-institution clinical imaging study. The Nano X is a prototype fixed-beam radiotherapy system developed to investigate the feasibility of a low-cost, compact radiotherapy system to increase global access to radiation therapy. This study aims to assess the feasibility of volumetric image guidance with cone beam computed tomography (CBCT) acquired during horizontal patient rotation on the Nano X radiotherapy system. Methods: In the Nano X IG study, we will determine whether radiotherapy image guidance can be performed with the Nano X radiotherapy system, where the patient is horizontally rotated while scan projections are acquired. We will acquire both conventional CBCT scans and Nano X CBCT scans for 30 patients aged 18 and above receiving radiotherapy for head/neck or upper-abdomen cancers. For each patient, a panel of experts will assess the image quality of the Nano X CBCT scans against conventional CBCT scans. Each patient will receive two Nano X CBCT scans to determine the image-quality reproducibility and the extent and reproducibility of patient motion, and to assess patient tolerance. Discussion: Fixed-beam radiotherapy systems have the potential to help ease the current shortfall and increase global access to radiotherapy treatment. Advances in image guidance could facilitate fixed-beam radiotherapy using horizontal patient rotation. The efficacy of this radiotherapy approach depends on our ability to image and adapt to motion due to rotation, and on patients tolerating rotation during treatment.
    Emily Debrot, Paul Liu, Mark Gardner, Soo Min Heng, Chin Hwa Chan, Stephanie Corde, Simon Downes, Michael Jackson, and Paul Keal

    Helicity Analysis of Semileptonic Hyperon Decays Including Lepton Mass Effects

    Using the helicity method we derive complete formulas for the joint angular decay distributions occurring in semileptonic hyperon decays, including lepton-mass and polarization effects. Compared to the traditional covariant calculation, the helicity method allows one to organize the calculation of the angular decay distributions in a very compact and efficient way. In the helicity method the angular analysis is of cascade type, i.e. each decay in the decay chain is analyzed in the respective rest system of that particle. Such an approach is ideally suited as input for a Monte Carlo event generation program. As a specific example we take the decay $\Xi^0 \to \Sigma^+ + l^- + \bar{\nu}_l$ ($l^- = e^-, \mu^-$), followed by the nonleptonic decay $\Sigma^+ \to p + \pi^0$, for which we show a few examples of decay distributions generated from a Monte Carlo program based on the formulas presented in this paper. All the results of this paper are also applicable to the semileptonic and nonleptonic decays of ground-state charm and bottom baryons, and to the decays of the top quark.
    Comment: Published version. 40 pages, 11 figures included in the text. Typos corrected, comments added, references added and updated.
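
    To make the Monte Carlo connection concrete: a single stage of such a cascade typically contributes a polar-angle factor of the generic form W(cos θ) ∝ 1 + α cos θ. A minimal accept-reject sampler for one stage might look like the sketch below, with α standing in for an asymmetry parameter built from helicity amplitudes (illustrative only, not the paper's generator):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_costheta(alpha, n):
    """Accept-reject sampling of cos(theta) from W ∝ 1 + alpha*cos(theta).

    alpha is a stand-in for a decay asymmetry parameter built from
    helicity amplitudes; |alpha| <= 1 keeps W non-negative.
    """
    out = np.empty(n)
    w_max = 1.0 + abs(alpha)
    filled = 0
    while filled < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, w_max) < 1.0 + alpha * c:
            out[filled] = c
            filled += 1
    return out
```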

    Results of the BiPo-1 prototype for radiopurity measurements for the SuperNEMO double beta decay source foils

    The development of BiPo detectors is dedicated to the measurement of extremely high radiopurity in $^{208}$Tl and $^{214}$Bi for the SuperNEMO double beta decay source foils. A modular prototype, called BiPo-1, with 0.8 m$^2$ of sensitive surface area, has been running in the Modane Underground Laboratory since February 2008. The goal of BiPo-1 is to measure the different components of the background and in particular the surface radiopurity of the plastic scintillators that make up the detector. The first phase of data collection has been dedicated to the measurement of the radiopurity in $^{208}$Tl. After more than one year of background measurement, a surface activity of the scintillators of $\mathcal{A}(^{208}\mathrm{Tl}) = 1.5\ \mu$Bq/m$^2$ is reported here. Given this level of background, a larger BiPo detector having 12 m$^2$ of active surface area is able to qualify the radiopurity of the SuperNEMO selenium double beta decay foils with the required sensitivity of $\mathcal{A}(^{208}\mathrm{Tl}) < 2\ \mu$Bq/kg (90% C.L.) with a six-month measurement.
    Comment: 24 pages, submitted to N.I.M.

    Spectral modeling of scintillator for the NEMO-3 and SuperNEMO detectors

    We have constructed a GEANT4-based detailed software model of photon transport in plastic scintillator blocks and have used it to study the NEMO-3 and SuperNEMO calorimeters employed in experiments designed to search for neutrinoless double beta decay. We compare our simulations to measurements using conversion electrons from a calibration source of $^{207}$Bi and show that the agreement is improved if wavelength-dependent properties of the calorimeter are taken into account. In this article, we briefly describe our modeling approach and the results of our studies.
    Comment: 16 pages, 10 figures
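
    The wavelength dependence matters because the scintillator emits over a band within which the bulk absorption length varies appreciably, so a single "average" attenuation misestimates the light reaching the photodetector. A toy Monte Carlo of that effect (generic Python, not the authors' GEANT4 model; the spectrum and absorption lengths below are invented placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented placeholder emission spectrum and absorption lengths.
wavelengths_nm = np.array([400.0, 420.0, 440.0, 460.0])
emission_pdf = np.array([0.2, 0.4, 0.3, 0.1])     # normalised weights
abs_length_cm = {400.0: 50.0, 420.0: 120.0, 440.0: 200.0, 460.0: 250.0}

def survival_fraction(path_cm, n_photons=100_000):
    """Fraction of photons surviving a straight path when the absorption
    length depends on wavelength (compare with exp(-path/<lambda>))."""
    wl = rng.choice(wavelengths_nm, size=n_photons, p=emission_pdf)
    lam = np.array([abs_length_cm[w] for w in wl])
    return np.exp(-path_cm / lam).mean()

print(survival_fraction(100.0))
```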

    Extreme events and predictability of catastrophic failure in composite materials and in the Earth

    Despite all attempts to isolate and predict extreme earthquakes, these nearly always occur without obvious warning in real time: fully deterministic earthquake prediction is very much a ‘black swan’. On the other hand, engineering-scale samples of rocks and other composite materials often show clear precursors to dynamic failure under controlled conditions in the laboratory, and successful evacuations have occurred before several volcanic eruptions. This may be because extreme earthquakes are not statistically special, being an emergent property of the process of dynamic rupture. Nevertheless, probabilistic forecasting of the event rate above a given size, based on the tendency of earthquakes to cluster in space and time, can have significant skill compared to, say, random failure, even in real-time mode. We address several questions in this debate, using examples from the Earth (earthquakes, volcanoes) and the laboratory, including the following. How can we identify ‘characteristic’ events, i.e. beyond the power law, in model selection (do dragon-kings exist)? How do we discriminate quantitatively between stationary and non-stationary hazard models (is a dragon likely to come soon)? Does the system size (the size of the dragon’s domain) matter? Are there localising signals of imminent catastrophic failure we may not be able to access (is the dragon effectively invisible on approach)? We focus on the effects of sampling and statistical uncertainty in the identification of extreme events and their predictability, and highlight the strong influence of scaling in space and time as an outstanding issue to be addressed by quantitative studies, experimentation and models.
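
    One concrete handle on the model-selection questions above is the Gutenberg-Richter power law itself: above a completeness magnitude, magnitudes are exponentially distributed, and the b-value that fixes the rate of events above a given size has a closed-form maximum-likelihood estimate (Aki's estimator). A minimal sketch on synthetic data (illustrative only, not the analysis in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def b_value_mle(magnitudes, m_c):
    """Aki's maximum-likelihood Gutenberg-Richter b-value estimate
    for events at or above the completeness magnitude m_c."""
    m = magnitudes[magnitudes >= m_c]
    return 1.0 / (np.log(10.0) * (m.mean() - m_c))

# Synthetic catalogue with completeness m_c = 2.0 and true b-value 1.0:
mags = 2.0 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
print(b_value_mle(mags, 2.0))   # expect a value close to 1.0
```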